In mathematics, matrix multiplication is a binary operation that takes a pair of matrices and produces another matrix. Numbers such as the real or complex numbers can be multiplied according to elementary arithmetic. Matrices, on the other hand, are ''arrays of numbers'', so there is no unique way to define "the" multiplication of matrices. As such, in general the term "matrix multiplication" refers to a number of different ways to multiply matrices. The key features of any matrix multiplication are the number of rows and columns the original matrices have (called the "size", "order" or "dimension") and the rule specifying how the entries of the matrices generate the new matrix.

Like vectors, matrices of any size can be multiplied by scalars, which amounts to multiplying every entry of the matrix by the same number. Similar to the entrywise definition of adding or subtracting matrices, multiplication of two matrices of the same size can be defined by multiplying the corresponding entries; this is known as the Hadamard product. Another definition is the Kronecker product of two matrices, which yields a block matrix. One can form many other definitions. However, the most useful definition is motivated by linear equations and linear transformations on vectors, which have numerous applications in applied mathematics, physics, and engineering. This definition is often called ''the'' matrix product. In words, if '''A''' is an ''n'' × ''m'' matrix and '''B''' is an ''m'' × ''p'' matrix, their matrix product '''AB''' is an ''n'' × ''p'' matrix, in which the ''m'' entries across each row of '''A''' are multiplied with the ''m'' entries down each column of '''B''' and summed (the precise definition is below).

This product is not commutative, although it retains the associative property and is distributive over entrywise addition of matrices. The identity element of the matrix product is the identity matrix (analogous to multiplying numbers by 1), and a square matrix may have an inverse matrix (analogous to the multiplicative inverse of a number).
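The row-by-column rule described above can be sketched in plain Python. This is a minimal illustration using nested lists, not an efficient implementation; the function name `mat_mul` is ours, not a standard library API:

```python
def mat_mul(A, B):
    """Standard matrix product of an n x m matrix A and an m x p matrix B.

    Entry C[i][j] is the sum of products of row i of A with column j of B.
    """
    n, m, p = len(A), len(B), len(B[0])
    # The inner dimensions must agree: A's column count equals B's row count.
    assert all(len(row) == m for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# A is 2 x 3 and B is 3 x 2, so the product AB is 2 x 2.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7,  8],
     [9, 10],
     [11, 12]]
C = mat_mul(A, B)  # [[58, 64], [139, 154]]
```

Note that `mat_mul(B, A)` would instead produce a 3 × 3 matrix, a simple demonstration that the matrix product is not commutative.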
A consequence of the matrix product is the multiplicativity of determinants: det('''AB''') = det('''A''') det('''B'''). The matrix product is an important operation in linear transformations, matrix groups, and the theory of group representations and irreducible representations (irreps). Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing '''AB''', especially for large matrices.

This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. '''A''', vectors in lowercase bold, e.g. '''a''', and entries of vectors and matrices are italic (since they are scalars), e.g. ''a'' and ''A''. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The ''i'', ''j'' entry of matrix '''A''' is indicated by ('''A''')<sub>''ij''</sub> or ''A''<sub>''ij''</sub>, whereas a numerical label (not a matrix entry) on a collection of matrices is subscripted only, e.g. '''A'''<sub>1</sub>, '''A'''<sub>2</sub>, etc.

== Scalar multiplication ==

:''For details, see Kronecker product.''

The left scalar multiplication of a matrix '''A''' with a scalar ''λ'' gives another matrix ''λ'''''A''' of the same size as '''A'''. The entries of ''λ'''''A''' are defined by

:<math>(\lambda \mathbf{A})_{ij} = \lambda\left(\mathbf{A}\right)_{ij}\,,</math>

explicitly:

:<math>\lambda \mathbf{A} = \lambda \begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1m} \\
A_{21} & A_{22} & \cdots & A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nm} \\
\end{pmatrix} = \begin{pmatrix}
\lambda A_{11} & \lambda A_{12} & \cdots & \lambda A_{1m} \\
\lambda A_{21} & \lambda A_{22} & \cdots & \lambda A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
\lambda A_{n1} & \lambda A_{n2} & \cdots & \lambda A_{nm} \\
\end{pmatrix}\,.</math>

Similarly, the right scalar multiplication of a matrix '''A''' with a scalar ''λ'' is defined to be

:<math>(\mathbf{A}\lambda)_{ij} = \left(\mathbf{A}\right)_{ij}\lambda\,,</math>

explicitly:

:<math>\mathbf{A}\lambda = \begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1m} \\
A_{21} & A_{22} & \cdots & A_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nm} \\
\end{pmatrix}\lambda = \begin{pmatrix}
A_{11}\lambda & A_{12}\lambda & \cdots & A_{1m}\lambda \\
A_{21}\lambda & A_{22}\lambda & \cdots & A_{2m}\lambda \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1}\lambda & A_{n2}\lambda & \cdots & A_{nm}\lambda \\
\end{pmatrix}\,.</math>

When the underlying ring is commutative, for example, the real or complex number field, these two multiplications are the same and are simply called ''scalar multiplication''. However, for matrices over a more general ring in which multiplication is ''not'' commutative, such as the quaternions, they may not be equal.

For a real scalar and matrix:

:<math>\lambda = 2, \quad \mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}</math>

:<math>2\mathbf{A} = 2\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 2a & 2b \\ 2c & 2d \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}2 = \mathbf{A}2\,.</math>

For quaternion scalars and matrices:

:<math>\lambda = i, \quad \mathbf{A} = \begin{pmatrix} i & 0 \\ 0 & j \end{pmatrix}</math>

:<math>i\mathbf{A} = \begin{pmatrix} i^2 & 0 \\ 0 & ij \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & k \end{pmatrix} \ne \begin{pmatrix} -1 & 0 \\ 0 & -k \end{pmatrix} = \begin{pmatrix} i^2 & 0 \\ 0 & ji \end{pmatrix} = \mathbf{A}i\,,</math>

where ''i'', ''j'', ''k'' are the quaternion units. The non-commutativity of quaternion multiplication prevents changing ''ij'' = +''k'' into ''ji'' = −''k''.

Source: the free encyclopedia Wikipedia, "Matrix multiplication".
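The quaternion example above can be checked directly in Python. The sketch below models quaternions as (''w'', ''x'', ''y'', ''z'') tuples under the Hamilton product; the helper names (`qmul`, `left_mul`, `right_mul`) are ours for illustration, not a standard API:

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# The quaternion units i, j, k and the zero quaternion.
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
zero = (0, 0, 0, 0)

def left_mul(lam, A):
    """Left scalar multiplication: multiply each entry by lam on the left."""
    return [[qmul(lam, a) for a in row] for row in A]

def right_mul(A, lam):
    """Right scalar multiplication: multiply each entry by lam on the right."""
    return [[qmul(a, lam) for a in row] for row in A]

A = [[i, zero],
     [zero, j]]

# ij = +k but ji = -k, so the bottom-right entries differ:
assert left_mul(i, A)[1][1] == k                # i*j = +k
assert right_mul(A, i)[1][1] == (0, 0, 0, -1)   # j*i = -k
assert left_mul(i, A) != right_mul(A, i)
```

Over the reals or complexes, where scalars commute with every entry, `left_mul` and `right_mul` would always agree; the quaternions are the simplest familiar ring where they can differ.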